test: add tests for security, telemetry, and logging #112
Conversation
- test_security.py: 14 tests for security.py (0% -> 100%)
- test_telemetry.py: 5 tests for telemetry.py (0% -> 81%)
- test_logging.py: 8 tests for logging.py (32% -> 100%)
- Total coverage: 72% -> 96%
- 34 tests passing
Pull request overview
This PR adds comprehensive test coverage for three core modules (security, telemetry, and logging), increasing overall test coverage from 72% to 96% with 34 passing tests. The tests are well-organized using class-based grouping and cover the main functionality of each module.
- Added 14 tests for security.py covering configuration validation, key generation, and health checks
- Added 5 tests for telemetry.py covering setup and request metrics logging
- Added 8 tests for logging.py covering logger configuration and retrieval
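For reference, the class-based grouping mentioned above generally looks like the minimal sketch below; `secrets.token_urlsafe` stands in for the project's own key-generation helper, whose actual name in security.py is not shown in this PR view.

```python
# Illustrative sketch only: secrets.token_urlsafe stands in for the project's
# key-generation helper; the test names are hypothetical.
import secrets


class TestSecureKeyGeneration:
    """Example of the class-based grouping used to organize related tests."""

    def test_key_has_sufficient_length(self):
        key = secrets.token_urlsafe(32)
        assert len(key) >= 32  # a common minimum for secret material

    def test_keys_are_unique(self):
        # Two consecutive calls should never return the same value.
        assert secrets.token_urlsafe(32) != secrets.token_urlsafe(32)
```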
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| app/tests/test_telemetry.py | New test file with 5 tests covering telemetry setup and request metrics logging functionality |
| app/tests/test_security.py | New test file with 14 tests covering production config validation, secure key generation, and security health checks |
| app/tests/test_logging.py | New test file with 8 tests covering logging setup with different levels and logger retrieval |
```python
        status_code=500,
        duration_ms=100.0,
    )
    assert "500" in caplog.text or "GET" in caplog.text
```
Copilot AI commented on Dec 13, 2025:
The assertion here is too weak and doesn't verify the actual logged message. The test should verify that both the status code "500" AND the expected log information are present in the log output. The current assertion with "or" will pass even if only "GET" is found, which doesn't properly validate error logging behavior.
| assert "500" in caplog.text or "GET" in caplog.text | |
| assert "500" in caplog.text and "GET" in caplog.text |
| env = {"CORS_ORIGINS": "*", "API_KEY": "", "SECRET_KEY": ""} | ||
| with patch.dict(os.environ, env, clear=False): | ||
| os.environ.pop("API_KEY", None) | ||
| os.environ.pop("SECRET_KEY", None) |
Copilot AI commented on Dec 13, 2025:
Similar to the API_KEY test, this test has confusing environment manipulation. Setting environment variables to empty strings and then immediately popping them creates unnecessary complexity. Use a cleaner approach with clear=True in patch.dict or only use the pop operation.
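A minimal sketch of the cleaner shape suggested here, assuming the test only needs CORS_ORIGINS present; the test name and the trailing comment about the health check are illustrative, not the PR's actual code.

```python
import os
from unittest.mock import patch


def test_missing_keys_reported_as_unhealthy():  # illustrative name
    # clear=True replaces the whole environment for the duration of the block,
    # so API_KEY and SECRET_KEY are genuinely absent and no pop() calls are needed.
    with patch.dict(os.environ, {"CORS_ORIGINS": "*"}, clear=True):
        assert "API_KEY" not in os.environ
        assert "SECRET_KEY" not in os.environ
        # ...call the security health check here and assert the unhealthy result...
```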
```python
# Verify only one handler exists
assert len(root.handlers) == 1
```
Copilot AI commented on Dec 13, 2025:
This test may not reliably verify handler cleanup behavior. After calling setup_logging(), the test asserts exactly 1 handler exists, but this doesn't necessarily prove that the dummy_handler was removed - it could be that setup_logging() removes all handlers and adds exactly 1 new one, or it could keep the dummy handler and not add a new one. The test should verify the handler identity or type to ensure proper cleanup occurred.
```diff
-# Verify only one handler exists
-assert len(root.handlers) == 1
+# Verify only one handler exists and it's not the dummy handler
+assert len(root.handlers) == 1
+assert dummy_handler not in root.handlers
```
| env = {"SECRET_KEY": "test-secret", "API_KEY": ""} | ||
| with patch.dict(os.environ, env, clear=False): | ||
| os.environ.pop("API_KEY", None) |
Copilot AI commented on Dec 13, 2025:
The test may not properly validate missing API_KEY behavior due to conftest.py setting default test environment variables. Even after popping API_KEY from os.environ, the patch.dict context only patches with the provided env dict and doesn't guarantee that API_KEY remains unset. The test should use clear=True in patch.dict to ensure complete isolation from conftest.py's default environment variables.
| env = {"SECRET_KEY": "test-secret", "API_KEY": ""} | |
| with patch.dict(os.environ, env, clear=False): | |
| os.environ.pop("API_KEY", None) | |
| env = {"SECRET_KEY": "test-secret"} | |
| with patch.dict(os.environ, env, clear=True): |
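For context, the conftest.py defaults mentioned above might be seeded roughly like this (the exact variables and mechanism in this repository are an assumption); anything seeded this way stays visible under clear=False unless the test removes it explicitly.

```python
# Hypothetical conftest.py excerpt: defaults seeded into the process environment
# before tests run. With patch.dict(..., clear=False) these values remain visible
# unless a test explicitly removes or overrides them.
import os

os.environ.setdefault("API_KEY", "test-api-key")
os.environ.setdefault("SECRET_KEY", "test-secret-key")
os.environ.setdefault("LOG_LEVEL", "INFO")
```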
```python
with patch.dict(os.environ, {}, clear=False):
    os.environ.pop("LOG_LEVEL", None)
```
Copilot AI commented on Dec 13, 2025:
This test may not reliably verify the default INFO level behavior. The conftest.py file sets LOG_LEVEL to "INFO" by default, so even after popping it from os.environ within the patch.dict context, the test may still be affected. Additionally, patch.dict with clear=False and an empty dict doesn't guarantee LOG_LEVEL is unset. Use clear=True or explicitly verify that LOG_LEVEL is not in os.environ after the pop.
```diff
-with patch.dict(os.environ, {}, clear=False):
-    os.environ.pop("LOG_LEVEL", None)
+with patch.dict(os.environ, {}, clear=True):
```
```python
with patch.dict(os.environ, env, clear=False):
    os.environ.pop("API_KEY", None)
    os.environ.pop("SECRET_KEY", None)
```
Copilot AI commented on Dec 13, 2025:
Similar to test_missing_api_key, this test may not work correctly due to conftest.py setting default environment variables. The patch.dict with clear=False means the test environment from conftest.py may still interfere. Consider using clear=True to ensure the test properly validates the unhealthy status when API_KEY and SECRET_KEY are truly missing.
```diff
-with patch.dict(os.environ, env, clear=False):
-    os.environ.pop("API_KEY", None)
-    os.environ.pop("SECRET_KEY", None)
+with patch.dict(os.environ, env, clear=True):
```
```python
import os
from unittest.mock import patch

import pytest
```
Copilot AI commented on Dec 13, 2025:
Import of 'pytest' is not used.
```diff
-import pytest
```
```python
import os
from unittest.mock import patch

import pytest
```
Copilot AI commented on Dec 13, 2025:
Import of 'pytest' is not used.
```diff
-import pytest
```
| """Tests for telemetry module.""" | ||
|
|
||
| import logging | ||
| from unittest.mock import MagicMock, patch |
Copilot AI commented on Dec 13, 2025:
Import of 'MagicMock' is not used.
Import of 'patch' is not used.
```diff
-from unittest.mock import MagicMock, patch
```
```python
import logging
from unittest.mock import MagicMock, patch

import pytest
```
Copilot AI commented on Dec 13, 2025:
Import of 'pytest' is not used.
```diff
-import pytest
```
🚀 Pull Request: Complete Railway Deployment Optimization
📋 Change Description
This PR implements the complete fix for the Railway crashes that occurred after 2 minutes, together with the full admin dashboard functionality for the NeuroBank FastAPI banking system.
🎯 Problem Solved
✅ Implemented Solution
🔧 Implemented Technical Changes
🚂 Railway Deployment
- `railway.json`: configuration with health checks and restart policies
- `start.sh`: smart startup script with validations
- `Dockerfile`: single-worker optimization + uvloop (see the sketch below)
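A rough sketch of the single-worker + uvloop start-up described above, using uvicorn's Python API; the module path `app.main:app` and the PORT handling are assumptions rather than the actual start.sh/Dockerfile contents.

```python
# Hypothetical start-up script: app.main:app is an assumed module path.
import os

import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "app.main:app",
        host="0.0.0.0",
        port=int(os.environ.get("PORT", "8000")),  # Railway injects PORT at runtime
        workers=1,          # single worker keeps memory within Railway's limits
        loop="uvloop",      # uvloop event loop for better throughput
    )
```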
📊 Admin Dashboard
- `admin_transactions.html`: complete transactions panel with Chart.js
- `admin_users.html`: user management with real-time search
- `admin_reports.html`: advanced reports with CSV/Excel export
- `router.py`: dedicated view wiring (no more generic templates)

🔄 CI/CD Pipeline
- `.github/workflows/production-pipeline.yml`: 8-stage pipeline

📚 Documentation Suite
- `HOTFIX_RAILWAY_CRASH.md`: technical analysis of the Railway problem
- `WORKFLOW.md`: development procedures
- `GIT_COMMANDS_HOTFIX.md`: deployment commands

🧪 Testing & Validation
✅ Validated Functionality
- `/health` endpoint operational (see the sketch below)
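A minimal sketch of the kind of `/health` route the Railway health check polls; the exact response payload in this project is an assumption.

```python
# Hypothetical health-check endpoint used by Railway's healthcheck path.
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
async def health() -> dict:
    # A 200 response here tells Railway the service is alive; anything else
    # triggers the restart policy configured in railway.json.
    return {"status": "ok"}
```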
🔒 Security Checks

⚡ Performance Tests
🎯 Business Impact
🚀 Deployment Instructions
Pre-merge Checklist
- `RAILWAY_TOKEN` configured in GitHub Secrets

Post-merge Actions
- `main`

👥 Review Requirements
🔍 Focus Areas for Review
- `railway.json` and `start.sh`

🎯 Expected Reviewers
📝 Additional Notes
🔄 Future Improvements
📚 Related Documentation
✅ Ready to Merge Criteria
🎉 This PR turns NeuroBank FastAPI into an enterprise-grade banking application with automated deployment and complete functionality!